Contrastive hypergraph transformer for session-based recommendation
Weichao DANG, Bingyang CHENG, Gaimei GAO, Chunxia LIU
Journal of Computer Applications    2023, 43 (12): 3683-3688.   DOI: 10.11772/j.issn.1001-9081.2022111654

A Contrastive Hypergraph Transformer for session-based recommendation (CHT) model was proposed to address the problems of noise interference and sample sparsity in session-based recommendation. Firstly, the session sequence was modeled as a hypergraph. Secondly, the global and local context information of items was constructed by the hypergraph transformer. Finally, in global relationship learning, the Item-Level (I-L) encoder and Session-Level (S-L) encoder were used to capture item embeddings at different levels, an information fusion module was used to fuse item embedding and reverse position embedding, and the global session representation was obtained by a soft attention module, while in local relationship learning the local session representation was generated with the help of a weighted line graph convolutional network. In addition, a contrastive learning paradigm was introduced to maximize the mutual information between the global and local session representations and thereby improve recommendation performance. Experimental results on several real datasets show that the recommendation performance of the CHT model is better than that of current mainstream models. Compared with the suboptimal model S2-DHCN (Self-Supervised Hypergraph Convolutional Network), the proposed model achieves a P@20 of 35.61% and an MRR@20 of 17.11% on the Tmall dataset, improvements of 13.34% and 13.69% respectively, and a P@20 of 54.07% and an MRR@20 of 18.59% on the Diginetica dataset, improvements of 0.76% and 0.43% respectively, verifying the effectiveness of the proposed model.
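The contrastive step described above can be sketched as an InfoNCE-style loss that pulls the global and local representations of the same session together while pushing different sessions apart. A minimal NumPy sketch, not the paper's implementation; the temperature and batch handling are assumptions:

```python
import numpy as np

def info_nce_loss(global_emb, local_emb, temperature=0.2):
    """InfoNCE-style contrastive loss between two views of the same sessions.

    Row i of each matrix is one session; matching rows are positive pairs,
    and all other rows in the batch serve as negatives.
    """
    # L2-normalise so dot products become cosine similarities
    g = global_emb / np.linalg.norm(global_emb, axis=1, keepdims=True)
    l = local_emb / np.linalg.norm(local_emb, axis=1, keepdims=True)
    logits = g @ l.T / temperature               # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
noisy = z + 0.05 * rng.normal(size=(8, 16))      # a second "view" of each row
aligned = info_nce_loss(z, noisy)                # matched pairs: low loss
shuffled = info_nce_loss(z, noisy[::-1])         # mismatched pairs: high loss
```

Minimising this loss is one standard way to maximise a lower bound on the mutual information between the two views.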

Table and Figures | Reference | Related Articles | Metrics
UAV path planning for persistent monitoring based on value function iteration
Chen LIU, Yang CHEN, Hao FU
Journal of Computer Applications    2023, 43 (10): 3290-3296.   DOI: 10.11772/j.issn.1001-9081.2022091464

Using an Unmanned Aerial Vehicle (UAV) to continuously monitor designated areas can deter invasion and damage and discover abnormalities in time, but fixed monitoring routes are easily discovered by invaders, so a randomized UAV flight path algorithm is needed. In view of this problem, a UAV persistent monitoring path planning algorithm based on Value Function Iteration (VFI) was proposed. Firstly, the states of the monitoring target points were selected reasonably, and the remaining time of each monitoring node was analyzed. Secondly, the value function of the corresponding state of each monitoring target point was constructed by combining the reward/penalty benefit and the path security constraint; in the VFI process, the next node was selected randomly based on the ε principle and roulette selection. Finally, with the goal that the growth of the value functions of all states tends to saturation, the UAV persistent monitoring path was solved. Simulation results show that the proposed algorithm obtains an information entropy of 0.905 0 with a VFI running time of 0.363 7 s. Compared with traditional Ant Colony Optimization (ACO), the information entropy is increased by 216% and the running time is decreased by 59%, so both randomness and rapidity are improved. It is verified that a randomized UAV flight path is of great significance for improving the efficiency of persistent monitoring.
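The iterate-and-randomly-select idea can be illustrated as follows; the graph, the rewards, and the exact form of the ε/roulette rule are illustrative assumptions, not the paper's specification:

```python
import random

def plan_monitoring_path(nodes, reward, neighbors, steps=200,
                         gamma=0.9, eps=0.2, seed=0):
    """Value-function-iteration sketch for a persistent-monitoring path.

    With probability eps the next node is drawn by roulette-wheel selection
    over neighbour values (randomness); otherwise the highest-value
    neighbour is taken (greed). All names and numbers are illustrative.
    """
    rng = random.Random(seed)
    V = {n: 0.0 for n in nodes}
    path, current = [nodes[0]], nodes[0]
    for _ in range(steps):
        nbrs = neighbors[current]
        if rng.random() < eps:                      # exploratory roulette pick
            weights = [max(V[n], 1e-6) for n in nbrs]
            nxt = rng.choices(nbrs, weights=weights)[0]
        else:                                       # greedy pick
            nxt = max(nbrs, key=lambda n: V[n])
        # one-step value backup for the node just visited
        V[current] = reward[current] + gamma * max(V[n] for n in nbrs)
        path.append(nxt)
        current = nxt
    return path, V

nodes = ["A", "B", "C", "D"]
neighbors = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "A"]}
reward = {"A": 1.0, "B": 0.5, "C": 1.5, "D": 0.8}
path, V = plan_monitoring_path(nodes, reward, neighbors)
```

The ε mixture is what makes the resulting route hard to predict while the value backups keep it biased toward high-reward nodes.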

Flight delay prediction model based on Conv-LSTM with spatiotemporal sequence
Jingyi QU, Liu YANG, Xuyang CHEN, Qian WANG
Journal of Computer Applications    2022, 42 (10): 3275-3282.   DOI: 10.11772/j.issn.1001-9081.2021091613

Accurate flight delay prediction results can provide great reference value for the prevention of large-scale flight delays. Flight delay prediction is a time-series prediction in a specific space, but most existing prediction methods combine two or more algorithms and therefore face the problem of fusion between algorithms. To solve this problem, a Convolutional Long Short-Term Memory (Conv-LSTM) network flight delay prediction model that considers temporal and spatial sequences comprehensively was proposed. In this model, on the basis that temporal features were extracted by the Long Short-Term Memory (LSTM) network, the input of the network and the weight matrix were convolved to extract spatial features, thereby making full use of the temporal and spatial information contained in the dataset. Experimental results show that the accuracy of the Conv-LSTM model is improved by 0.65 percentage points compared with LSTM, and is 2.36 percentage points higher than that of the Convolutional Neural Network (CNN) model that only considers spatial information. It can be seen that considering temporal and spatial characteristics at the same time yields more accurate predictions for the flight delay problem. In addition, based on the proposed model, a flight delay analysis system with Browser/Server (B/S) architecture was designed and implemented, which can be applied to the flow control center of an air traffic administration.
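The core idea, replacing the dense matrix products inside the LSTM gates with convolutions so spatial structure is preserved, can be sketched as one single-channel ConvLSTM cell step. This is a didactic NumPy sketch under assumed kernel shapes, not the paper's model:

```python
import numpy as np

def conv2d_same(x, k):
    """'Same'-padded single-channel 2-D convolution used inside each gate."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, W):
    """One ConvLSTM cell update: each gate convolves the input and the
    hidden state instead of multiplying by dense weight matrices.
    W holds one input kernel and one hidden kernel per gate (illustrative)."""
    gates = {g: conv2d_same(x, W["x" + g]) + conv2d_same(h, W["h" + g])
             for g in ("i", "f", "o", "g")}
    i, f, o = sigmoid(gates["i"]), sigmoid(gates["f"]), sigmoid(gates["o"])
    c_new = f * c + i * np.tanh(gates["g"])   # convolutional cell update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(1)
W = {k: 0.1 * rng.normal(size=(3, 3)) for k in
     ("xi", "xf", "xo", "xg", "hi", "hf", "ho", "hg")}
x = rng.normal(size=(5, 5))                   # one spatial frame
h, c = convlstm_step(x, np.zeros((5, 5)), np.zeros((5, 5)), W)
```

Because the gates are convolutions, the hidden state keeps the same spatial grid as the input, which is what lets the model use both temporal and spatial information at once.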

Multi-graph neural network-based session perception recommendation model
NAN Ning, YANG Chengyi, WU Zhihao
Journal of Computer Applications    2021, 41 (2): 330-336.   DOI: 10.11772/j.issn.1001-9081.2020060805
Session-based recommendation algorithms mainly rely on information from the target session, but fail to fully utilize collaborative information from other sessions. To solve this problem, a Multi-Graph neural network-based Session Perception recommendation (MGSP) model was proposed. Firstly, according to the target session and all sessions in the training set, an Item-Transition Graph (ITG) and a Collaborative Relation Graph (CRG) were constructed. Based on these two graphs, a Graph Neural Network (GNN) was applied to aggregate node information and obtain two types of node representations. Then, the session-level representation was obtained after a two-layer attention module modeled the two types of node representations. Finally, the ultimate session representation was gained by fusing this information with an attention mechanism, and the next interaction item was predicted. Comparison experiments were carried out in two scenarios, e-commerce and civil aviation. Experimental results show that the proposed algorithm is superior to the optimal benchmark model, with increases of more than 1 percentage point and 3 percentage points in the indicators on the e-commerce and civil aviation datasets respectively, verifying the effectiveness of the proposed model.
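The attention-based pooling used to turn item-level representations into a session-level representation can be sketched as follows; using the last clicked item as the attention query is an assumption for illustration, not the paper's exact design:

```python
import numpy as np

def soft_attention_session(item_embs, query):
    """Soft-attention pooling: score each item against a query vector and
    return the attention-weighted sum as the session representation."""
    scores = item_embs @ query
    scores -= scores.max()                    # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha @ item_embs, alpha

rng = np.random.default_rng(2)
items = rng.normal(size=(6, 8))               # six item embeddings in a session
query = items[-1]                             # last click often drives intent
session_vec, alpha = soft_attention_session(items, query)
```

The weights `alpha` form a distribution over the session's items, so the pooled vector emphasises items most related to the query.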
Research advances in disentangled representation learning
Keyang CHENG, Chunyun MENG, Wenshan WANG, Wenxi SHI, Yongzhao ZHAN
Journal of Computer Applications    2021, 41 (12): 3409-3418.   DOI: 10.11772/j.issn.1001-9081.2021060895

The purpose of disentangled representation learning is to model the key factors that affect the form of data, so that a change in one key factor only changes the data along a certain feature while the other features are unaffected. This helps machine learning meet the challenges of model interpretability, object generation and manipulation, zero-shot learning and other issues, so disentangled representation learning has long been a research hotspot in the field of machine learning. Starting from the history and motivation of disentangled representation learning, the research status and applications of disentangled representation learning were summarized; the invariance, reusability and other characteristics of disentangled representations were analyzed; research on modeling the factors of variation via generative entangling, with manifold interaction, and using adversarial training was introduced, as well as the latest research trends such as the Variational Auto-Encoder (VAE) variant β-VAE. At the same time, typical applications of disentangled representation learning were shown, and future research directions were discussed.

Distributed rough set attribute reduction algorithm under Spark
Xiajie ZHANG, Jinghua ZHU, Yang CHEN
Journal of Computer Applications    2020, 40 (2): 518-523.   DOI: 10.11772/j.issn.1001-9081.2019091642

Attribute reduction (feature selection) is an important part of data preprocessing. Most attribute reduction methods use attribute dependence as the criterion for filtering attribute subsets. A Fast Dependence Calculation (FDC) method was designed to compute dependence by directly searching for the objects of the relative positive region, without finding the relative positive region in advance, which gives a significant speed improvement over traditional methods. In addition, the Whale Optimization Algorithm (WOA) was improved to make this calculation method effective for rough set attribute reduction. Combining the two methods, a distributed rough set attribute reduction algorithm based on Spark, named SP-WOFRST, was proposed and compared with SP-RST, another Spark-based rough set attribute reduction algorithm, on two large synthetic datasets. Experimental results show that the proposed SP-WOFRST algorithm is superior to SP-RST in both accuracy and speed.
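The dependence criterion itself is easy to state: γ(C, D) is the fraction of objects whose condition-attribute block carries a single decision value. A plain-Python sketch of this computation (illustrative; not the paper's FDC implementation or its Spark distribution):

```python
def dependence(table, cond_attrs, dec_attr):
    """Rough-set dependence gamma(C, D) = |POS_C(D)| / |U|.

    Objects are grouped into blocks by their condition-attribute values;
    a block lies in the positive region iff all of its objects share one
    decision value, so the positive region never has to be materialised
    object by object.
    """
    blocks = {}
    for obj in table:
        key = tuple(obj[a] for a in cond_attrs)
        count, decisions = blocks.setdefault(key, [0, set()])
        blocks[key][0] = count + 1
        decisions.add(obj[dec_attr])
    pos = sum(count for count, decisions in blocks.values()
              if len(decisions) == 1)
    return pos / len(table)

table = [
    {"headache": 1, "temp": "high",   "flu": "yes"},
    {"headache": 1, "temp": "high",   "flu": "yes"},
    {"headache": 0, "temp": "high",   "flu": "no"},
    {"headache": 0, "temp": "normal", "flu": "no"},
    {"headache": 1, "temp": "normal", "flu": "no"},
    {"headache": 1, "temp": "normal", "flu": "yes"},  # conflicts with row above
]
gamma_full = dependence(table, ["headache", "temp"], "flu")  # 4/6
gamma_head = dependence(table, ["headache"], "flu")          # 2/6
```

Attribute subsets are then scored by how little γ drops when attributes are removed, which is the fitness an optimizer such as WOA can search over.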

Optimized algorithm for k-step reachability queries on directed acyclic graphs
Ming DU, Anping YANG, Junfeng ZHOU, Ziyang CHEN, Yun YANG
Journal of Computer Applications    2020, 40 (2): 426-433.   DOI: 10.11772/j.issn.1001-9081.2019081605

A k-step reachability query answers whether there exists a path of length at most k between two nodes in a Directed Acyclic Graph (DAG). Concerning the large index size and low query processing efficiency of existing approaches, a bi-directional shortest path index based on partial nodes was proposed to improve the coverage of reachable queries, and a set of optimization rules was proposed to reduce the index size. Then, a bi-directional reversed topological index was proposed to accelerate the answering of unreachable queries on the simplified graph. Finally, a farthest-node-first-visiting bi-directional traversal strategy was proposed to improve query processing efficiency. Experimental results on 21 real datasets, such as citation networks and social networks, show that compared with existing efficient approaches including PLL (Pruned Landmark Labeling) and BFSI-B (Breadth First Search Index-Bilateral), the proposed algorithm has smaller index size and higher query response speed.
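For reference, the query semantics being indexed can be answered by a depth-bounded breadth-first search, the naive baseline that the proposed indexes accelerate:

```python
from collections import deque

def k_step_reachable(adj, u, v, k):
    """Answer whether v is reachable from u within k edges via bounded BFS.

    adj maps a node to its list of successors. This is the baseline
    semantics; index-based methods answer the same question faster.
    """
    if u == v:
        return True
    frontier, seen, dist = deque([u]), {u}, 0
    while frontier and dist < k:
        dist += 1
        for _ in range(len(frontier)):       # expand one BFS level
            for w in adj.get(frontier.popleft(), ()):
                if w == v:
                    return True
                if w not in seen:
                    seen.add(w)
                    frontier.append(w)
    return False

dag = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
```

For example, node 4 is reachable from node 0 in three steps (0→1→3→4) but not in two.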

Survey of large-scale resource description framework data partitioning methods in distributed environment
YANG Cheng, LU Jiamin, FENG Jun
Journal of Computer Applications    2020, 40 (11): 3184-3191.   DOI: 10.11772/j.issn.1001-9081.2020040539
With the rapid development of knowledge graphs and their wide usage in various vertical domains, efficient processing of Resource Description Framework (RDF) data has increasingly become a new topic in the field of modern big data management. RDF is a data model proposed by the W3C to describe knowledge graph entities and the relationships between them. To cope effectively with the storage and querying of large-scale RDF data, many scholars manage RDF data in a distributed environment. The key problem faced by distributed storage of RDF data is data partitioning, and the performance of SPARQL (Simple Protocol and RDF Query Language) queries is largely determined by the partitioning results. From the perspective of data partitioning, two types of methods were described in depth: graph structure-based RDF data partitioning methods and semantics-based RDF data partitioning methods. The former include multi-granularity hierarchical partitioning, template partitioning and clustering partitioning, and suit the broad semantic categories of general-domain queries, while the latter include hash partitioning, vertical partitioning and pattern partitioning, and better suit the relatively fixed semantic categories of vertical-domain queries. In addition, several typical partitioning methods were compared and analyzed to provide insight for future research on RDF data partitioning methods. Finally, future research directions of RDF data partitioning methods were summarized.
Support vector data description method based on probability
YANG Chen, WANG Jieting, LI Feijiang, QIAN Yuhua
Journal of Computer Applications    2019, 39 (11): 3134-3139.   DOI: 10.11772/j.issn.1001-9081.2019050823
In view of the high complexity of current probabilistic machine learning methods in solving probability problems, and the fact that traditional Support Vector Data Description (SVDD), as a kernel density estimation method, can only estimate whether a test sample belongs to a class, a probability-based SVDD method was proposed. Firstly, the traditional SVDD method was used to obtain data descriptions of two classes of data, and the distance between the test sample and each hypersphere was calculated. Then, a function converting this distance into a probability was constructed, yielding the probability-based SVDD method, and Bagging was used for ensemble integration to further improve data description performance. Referring to classification scenarios, the proposed method was compared with the traditional SVDD method on 13 benchmark datasets of Gunnar Raetsch. The experimental results show that the proposed method is better than the traditional SVDD method in accuracy and F1-score, and its data description performance is improved.
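The distance-to-probability conversion can be sketched with a sigmoid of the signed margin to the hypersphere boundary; the exact function used in the paper may differ, so both the form and the `scale` parameter here are assumptions:

```python
import math

def svdd_probability(dist, radius, scale=1.0):
    """Map an SVDD distance to a membership probability (illustrative form).

    A sample inside the hypersphere (dist < radius) gets p > 0.5; the
    further outside it lies, the closer p falls to 0.
    """
    return 1.0 / (1.0 + math.exp((dist - radius) / scale))

def classify(dist_a, rad_a, dist_b, rad_b):
    """Two-class decision: assign the class whose data description gives
    the higher membership probability."""
    pa = svdd_probability(dist_a, rad_a)
    pb = svdd_probability(dist_b, rad_b)
    return ("A", pa) if pa >= pb else ("B", pb)
```

Unlike the raw in/out decision of traditional SVDD, the probability output makes the two class descriptions directly comparable and easy to average inside a Bagging ensemble.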
Click through rate prediction algorithm based on user's real-time feedback
YANG Cheng
Journal of Computer Applications    2017, 37 (10): 2866-2870.   DOI: 10.11772/j.issn.1001-9081.2017.10.2866
At present, most Click-Through Rate (CTR) prediction algorithms for online advertising focus on mining the correlation between users and advertisements from large-scale log data with machine learning methods, without considering the impact of the user's real-time feedback. Analysis of a large amount of real-world online advertising log data shows that the dynamic change of CTR is highly correlated with the user's previous feedback: different user behaviors typically have different effects on real-time CTR. On the basis of this analysis, an algorithm based on the user's real-time feedback was proposed. Firstly, the correlation between user feedback and real-time CTR was quantitatively analyzed on large-scale real-world online advertising logs. Secondly, based on the analysis results, the user's feedback was turned into features and fed into a machine learning model to model user behavior. Finally, online advertising impressions were dynamically adjusted according to user feedback, improving the precision of CTR prediction. Experimental results on real-world online advertising datasets show that the proposed algorithm improves the precision of CTR prediction significantly: compared with the contrast models, the metrics of Area Under the ROC Curve (AUC) and Relative Information Gain (RIG) are increased by 0.83% and 6.68% respectively.
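One common way to featurize real-time feedback is exponential time decay, so that a click seen a minute ago outweighs one from yesterday. The field names, the half-life, and the feature set below are illustrative assumptions, not the paper's feature design:

```python
import math

def feedback_features(events, now, half_life=3600.0):
    """Turn a user's recent feedback into decayed count features.

    events: list of (timestamp, action) with action in
    {'impression', 'click'}; an event's weight halves every `half_life`
    seconds, so fresh feedback dominates the derived features.
    """
    decayed = {"impression": 0.0, "click": 0.0}
    lam = math.log(2) / half_life
    for ts, action in events:
        decayed[action] += math.exp(-lam * (now - ts))
    ctr = (decayed["click"] / decayed["impression"]
           if decayed["impression"] else 0.0)
    return {"decayed_impressions": decayed["impression"],
            "decayed_clicks": decayed["click"],
            "recent_ctr": ctr}

# A click and an impression an hour ago, plus a fresh impression:
feats = feedback_features(
    [(0, "impression"), (0, "click"), (3600, "impression")], now=3600)
```

Features like these can be appended to the usual user/ad features before training any standard CTR model.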
Optimal strategy for production-distribution network of perishable products based on WCVaR
ZHANG Lei, YANG Chenghu, LU Meijin
Journal of Computer Applications    2015, 35 (2): 566-571.   DOI: 10.11772/j.issn.1001-9081.2015.02.0566

For the production-distribution network of perishable products whose demand information has only a partially known probability distribution, the Worst-Case Conditional Value-at-Risk (WCVaR) was introduced to measure risk. Considering the effects of factors such as production, logistics distribution and transportation path on production cost, transportation cost, storage cost and stockout loss, an optimization model minimizing WCVaR at a given service level was proposed, and the optimal strategy was obtained by minimizing the tail risk loss of the production-distribution network. Numerical simulation results show that, compared with the robust optimization method, the WCVaR method can handle more volatile uncertainty and has better stability. When demand obeys a mixed distribution, the optimization problem of a production-distribution network under uncertainty can be well solved with the WCVaR optimization model.

Integer discrete cosine transform algorithm for distributed video coding framework
WANG Yanming, CHEN Bo, GAO Xiaoming, YANG Cheng
Journal of Computer Applications    2014, 34 (10): 2948-2952.   DOI: 10.11772/j.issn.1001-9081.2014.10.2948

The integer Discrete Cosine Transform (DCT) algorithm of H.264 cannot be applied to the Distributed Video Coding (DVC) framework directly because of its high coder complexity. In view of this, an integer DCT algorithm and a transform radix generating method based on fixed-step quantization with step length 2^x (x a positive integer) were presented. Since the transform radix in H.264 can be stretched, this feature was fully exploited to find the transform radix best suited to the working principle of the hardware, and the scaling-quantization stage was moved from the coder to the decoder to reduce coder complexity under the premise of a "small" transform radix. In this process, the algorithm guaranteed image quality by saturated amplification of the DCT coefficients, guaranteed reliability by an overflow upper limit, and improved compression performance by reducing radix error. The experimental results show that, compared with the corresponding module in H.264, the quantization method of this algorithm is convenient for bit-plane extraction, reduces the calculation of the scaling-quantization stage of the coder to 16 integer constant additions under the premise of quasi-lossless compression, and raises the ratio of image quality to compression by 0.239. The algorithm conforms to the DVC framework.
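The transform being adapted is H.264's 4×4 integer core transform, in which the per-coefficient scaling that would make it an exact DCT is folded into quantization — precisely the stage the abstract relocates to the decoder. A NumPy sketch of the core transform (standard H.264 matrix; the paper's modified radix is not reproduced here):

```python
import numpy as np

# H.264 4x4 forward core transform matrix: integer entries only, so the
# transform needs no multiplications beyond shifts and additions.
C = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]])

def forward_core_transform(block):
    """Y = C X C^T, the scaling-free integer core transform."""
    return C @ block @ C.T

flat = np.full((4, 4), 5)             # constant block: all energy is DC
Y = forward_core_transform(flat)

# Exact float inversion, just to check the transform loses nothing:
Ci = np.linalg.inv(C.astype(float))
recovered = Ci @ Y @ Ci.T
```

For the constant block, only the DC coefficient Y[0,0] = 16·5 = 80 is nonzero, and the float inverse recovers the block exactly, confirming the integer transform is lossless before quantization.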

Segmentation of cell two-photon microscopic image based on center location algorithm
HU Hengyang, CHEN Guannan, WANG Ping, LIU Yao
Journal of Computer Applications    2013, 33 (09): 2694-2697.   DOI: 10.11772/j.issn.1001-9081.2013.09.2694
Complex backgrounds, severe noise and fuzzy boundaries make the performance of available cell image segmentation methods disappointing, so a new method that locates and detects nuclei effectively was proposed. A coarse-to-fine segmentation strategy was adopted to extract the edge of the nucleus gradually. First, using the C-means clustering algorithm, the image was divided into three parts: nucleus, cytoplasm and intercellular substance. Second, the center of the cell was located by calculating the circularity of the Canny edge image. Finally, a reformed level set evolution was introduced to extract the edge of the nucleus. The experimental results show that the nucleus can be located accurately even if the cell image has a complex background and many interfering objects, and the nucleus edge extracted by this method has higher accuracy.
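The first two steps can be sketched directly: intensity C-means splits the pixels into three classes, and circularity (4πA/P²) scores edge contours so that round nuclei stand out. A NumPy sketch on synthetic intensities; the quantile initialisation and all numbers are assumptions:

```python
import numpy as np

def cmeans_intensity(pixels, k=3, iters=30):
    """Hard C-means on pixel intensities: split the image into k classes
    (nucleus / cytoplasm / intercellular substance in the paper's setting).
    Centers are initialised at intensity quantiles for determinism."""
    pixels = np.asarray(pixels, dtype=float)
    centers = np.quantile(pixels, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return centers, labels

def circularity(area, perimeter):
    """4*pi*A / P^2: 1.0 for a perfect disc, smaller for elongated shapes;
    scoring Canny contours this way singles out round nucleus candidates."""
    return 4.0 * np.pi * area / perimeter ** 2

# Synthetic image intensities drawn from three well-separated classes:
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(m, 2.0, 200) for m in (10.0, 100.0, 200.0)])
centers, labels = cmeans_intensity(pixels)
```

A disc of radius 2 (area 4π, perimeter 4π) scores exactly 1.0, while fragmented or elongated contours score lower and are rejected as nucleus centers.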
Survey on Chinese text sentiment analysis
WEI Wei, XIANG Yang, CHEN Qian
Journal of Computer Applications    2011, 31 (12): 3321-3323.  
Sentiment analysis has aroused the interest of many researchers in recent years, since subjective texts are useful for many applications. Sentiment analysis mines and analyzes subjective text, aiming to acquire valuable knowledge and information. This paper surveys the state of the art of Chinese sentiment analysis. Firstly, the techniques are introduced in detail according to different granularity levels, namely word, sentence and document, and research on product reviews and news reviews is presented respectively. Then, evaluations and corpora for Chinese text sentiment analysis are introduced. The difficulties and trends of Chinese text sentiment analysis are concluded finally. The paper focuses on the major methods and key technologies in this field, with detailed analysis and comparison.
Research and design of Agent integrity protection mechanism on remote untrusted platform
Cui YANG, Cheng-xiang TAN
Journal of Computer Applications    2009, 29 (11): 3001-3004.  
Plenty of security problems may occur when servers adopt Agents to deploy mobile code to realize interactive storage between different business clients. In order to pursue higher software reliability and to ensure those Agents run healthily in an untrusted complex environment, after analyzing the traditional integrity validating mechanism and combining I&A, PCC and reflection techniques, a new classified mechanism for ensuring the integrity of trusted terminal Agents was proposed, and an efficient validating model with multiple interacting modules was designed, aiming at improving the reliability of mobile code through behavior monitoring.
Research of user Agent mechanism in RBAC model
CHEN Yueyang, MA Xuesen, HAN Jianghong
Journal of Computer Applications   
In order to solve the increasingly complicated problem of permission assignment between users and services on an ASP service platform, a new Role-Based Access Control model with a user Agent mechanism (A_RBAC) was proposed. The A_RBAC model associated platform services with enterprise-level users through an Agent layer, adopted a user-role access control mechanism with classified authorization, and implemented regional autonomy of permission access in the user Agent mechanism; it was applied to the information trusteeship platform for small and medium enterprises in Hefei city with Lightweight Directory Access Protocol (LDAP) and Java 2 Platform Enterprise Edition (J2EE). The results show that the complexity of permission assignment is effectively reduced.
Identity verification system using JPEG 2000 real-time quantization watermarking and fingerprint recognition
JIANG Dan,XUAN Guo-rong,YANG Cheng-yun,ZHENG Yi-zhan,LIU Lian-sheng,BAI Wei-chao
Journal of Computer Applications    2005, 25 (08): 1750-1752.   DOI: 10.3724/SP.J.1087.2005.01750
The proposed JPEG 2000 real-time quantization watermarking algorithm was used in an improved online bank pension distribution system based on fingerprint recognition and digital watermarking technologies. On the client side, a real-time quantization watermark was embedded into the sampled fingerprint image in the JPEG 2000 coding pipeline, and the compressed bit-stream was sent to the server. On the server side, the watermark was extracted from the compressed bit-stream in the JPEG 2000 decoding pipeline, and the decompressed fingerprint image and extracted watermark were used to verify the user's identity. Experiments showed that when a typical fingerprint image was compressed to 1/4 to 1/20 of its original size, the embedded watermark could be exactly extracted, and the fingerprint recognition rate remained almost the same after lossy compression. The system interacts well over band-limited networks and is very promising for E-business applications.
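Quantization watermarking can be illustrated with quantization index modulation (QIM): a coefficient is snapped to an even or odd multiple of the quantization step depending on the bit, and the bit survives any distortion smaller than half a step. This is a generic sketch of the idea, not the paper's JPEG 2000 pipeline; the step size is an assumption:

```python
def qim_embed(coeff, bit, step=8.0):
    """Quantization-index-modulation embedding: snap the coefficient to the
    even lattice of `step` for bit 0 and the odd lattice for bit 1."""
    q = round(coeff / step)
    if q % 2 != bit:                       # wrong lattice: move to neighbour
        q += 1 if coeff / step >= q else -1
    return q * step

def qim_extract(coeff, step=8.0):
    """Read the bit back from the parity of the nearest lattice point;
    correct for any perturbation smaller than step / 2."""
    return int(round(coeff / step)) % 2
```

Because embedding happens during quantization anyway, the watermark adds essentially no distortion beyond what lossy coding already introduces, which is what makes the scheme compatible with a real-time JPEG 2000 pipeline.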